I’m not sure exactly what you’re asking with the second paragraph. In any case, I don’t think the Singularity Institute is dogmatically in favor of friendliness; they’ve collaborated with Nick Bostrom on thinking about Oracle AI.
Why is it necessarily weak? I have found it very instrumentally useful to try to factor out the belief-propagation impact of people with nothing clearly impressive to show. There is a small risk that I miss some useful insights, but in exchange there is much less pollution from privileged hypotheses seeded by wrong priors. I am a computationally bounded agent; I can’t process everything.
I have found it very instrumentally useful to try to factor out the belief-propagation impact of people with nothing clearly impressive to show. There is a small risk that I miss some useful insights, but in exchange there is much less pollution from privileged hypotheses seeded by wrong priors.
It’s perfectly OK to assign low priors to strange beliefs, like: “Here is EY, some guy from the internet who has found a way to save the world, because all the scientists are wrong. Everybody, listen to him, take him seriously despite his lack of credentials, give him your money, and spread his words.” However, low does not mean infinitely low. A hypothesis with a low prior can still be saved by sufficient evidence.
For example, the hypothesis “Washington is the capital city of the USA” also has a very low prior: there are over 30,000 towns in the USA, and only one of them can be the capital, so why exactly should I privilege the Washington hypothesis? But there happens to be more than enough evidence to override the initially low prior.
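To make the “a low prior can be saved by evidence” point concrete, here is a minimal sketch of the update in Python. The likelihood ratios are made-up illustrative assumptions, not anything from the thread; the only point is that a 1-in-30,000 prior is easily overwhelmed by a few strong observations.

# Minimal sketch: start from a ~1-in-30,000 prior that a given US town
# is the capital, then update on evidence via (assumed) likelihood ratios.

prior_odds = 1 / 29_999  # prior of ~1/30,000, expressed as odds

# Hypothetical likelihood ratios: how much more likely each observation
# is if "Washington is the capital" than if it is not.
likelihood_ratios = [
    1000,  # every atlas and textbook names Washington
    100,   # news datelines place the federal government there
    10,    # people correct you if you claim otherwise
]

posterior_odds = prior_odds
for lr in likelihood_ratios:
    posterior_odds *= lr

posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"posterior probability: {posterior_prob:.4f}")  # ~0.9709

The same mechanism applies to the EY question below: each piece of weak evidence multiplies the odds a little, and enough of them can move even a very low prior.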
So basically the question is how much evidence EY needs before it becomes rational to consider his thoughts seriously (which does not yet mean he is right); how low, exactly, is this prior? So: how many people on this planet are putting a comparable amount of time and study into the topic of the values of artificial intelligence? Is he able to convince seemingly rational people, or is he followed by a bunch of morons? Is his criticism of scientific processes just unsubstantiated school-dropout envy, or has he been proven right? Etc. I don’t pretend to be doing a Bayesian calculation; it just seems to me that the prior is not that low, and that there is enough evidence. (And by the way, Dmytry, your presence at this website is also weak evidence, isn’t it? I guess there are millions of web pages that you do not read and comment on regularly. There are even many computer-related or AI-related pages you don’t read, but you do read this one. Why?)
Oh please. There’s a difference between what makes a useful heuristic for deciding what to spend your time considering and what makes a persuasive argument in a large debate where participants are willing to spend time hashing out the specifics.
http://paulgraham.com/disagree.html
DH1. Ad Hominem. An ad hominem attack is not quite as weak as mere name-calling. It might actually carry some weight. For example, if a senator wrote an article saying senators’ salaries should be increased, one could respond:
Of course he would say that. He’s a senator.
This wouldn’t refute the author’s argument, but it may at least be relevant to the case. It’s still a very weak form of disagreement, though. If there’s something wrong with the senator’s argument, you should say what it is; and if there isn’t, what difference does it make that he’s a senator?
I have found it very instrumentally useful to try to factor out the belief-propagation impact of people with nothing clearly impressive to show.
If even widely read bloggers like EY don’t qualify to affect your opinions, it sounds as though you’re ignoring almost everyone.
No one is expecting you to adopt their priors… Just read and make arguments about ideas instead of people, if you’re trying to make an inference about ideas.
If even widely read bloggers like EY don’t qualify to affect your opinions, it sounds as though you’re ignoring almost everyone.
I think you discarded one of the conditionals: I do read Bruce Schneier’s blog, or Paul Graham’s. Furthermore, it is not about disagreement with the notion of AI risk; it’s about keeping the data non-cherry-picked, or at least less cherry-picked.
It’s not irrational; it’s just weak evidence.